
    Stochastic Intermediate Gradient Method for Convex Problems with Inexact Stochastic Oracle

    In this paper we introduce new methods for convex optimization problems with an inexact stochastic oracle. The first method is an extension of the intermediate gradient method proposed by Devolder, Glineur and Nesterov for problems with an inexact oracle. Our new method applies to problems with composite structure and a stochastic inexact oracle, and allows a non-Euclidean setup. We prove estimates for the mean rate of convergence and for the probabilities of large deviations from this rate. We also introduce two modifications of this method for strongly convex problems: for the first modification we prove mean-rate-of-convergence estimates, and for the second we prove estimates for large deviations from the mean rate of convergence. All these rates yield complexity estimates for the proposed methods that coincide, up to a multiplicative constant, with the lower complexity bound for the considered class of convex composite optimization problems with a stochastic inexact oracle.
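
    For reference, the inexact oracle to which the abstract refers is usually stated as the $(\delta, L)$-oracle of Devolder, Glineur and Nesterov; the formulation below is the standard definition added here for context, not a formula quoted from the paper. A first-order $(\delta, L)$-oracle returns, at a query point $y$, a pair $(f_\delta(y), g_\delta(y))$ such that for all $x$
    \[
      0 \le f(x) - f_\delta(y) - \langle g_\delta(y),\, x - y \rangle \le \frac{L}{2}\,\|x - y\|^2 + \delta .
    \]
    Setting $\delta = 0$ recovers the exact oracle of an $L$-smooth convex function; in the stochastic variant considered here, only noisy realizations of this pair are available to the method.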

    Accelerated Methods for α-Weakly-Quasi-Convex Problems

    Many problems encountered in training neural networks are non-convex. However, some of them satisfy conditions weaker than convexity that are still sufficient to guarantee the convergence of certain first-order methods. In our work we show that some previously known first-order methods retain their convergence rates under these weaker conditions.
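
    For context, the condition in the title is commonly defined as follows; this is the standard formulation of weak quasi-convexity, added here rather than quoted from the paper. A differentiable function $f$ with minimizer $x^*$ is $\alpha$-weakly-quasi-convex for some $\alpha \in (0, 1]$ if
    \[
      \alpha \bigl( f(x) - f(x^*) \bigr) \le \langle \nabla f(x),\, x - x^* \rangle \quad \text{for all } x .
    \]
    For $\alpha = 1$ this inequality is implied by convexity, while for $\alpha < 1$ it also admits non-convex functions, such as certain objectives arising in the training of neural networks.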